7 research outputs found

    A survey on computer vision technology in Camera Based ETA devices

    Electronic Travel Aid (ETA) systems are expected to help visually impaired persons perform everyday tasks, such as finding objects and avoiding obstacles, more easily. Among ETA devices, camera-based devices are the newest and have high potential for helping visually impaired persons (VIP). With recent advances in computer science, and especially in computer vision, camera-based ETA devices have applied several computer vision algorithms and techniques, such as object recognition and stereo vision, to help VIP perform tasks such as reading banknotes, recognizing people, and avoiding obstacles. This paper analyses and appraises a number of works in this area, with a focus on stereo vision. Finally, after discussing the methods and techniques used in the different works, it concludes that stereo vision is the best technique for helping VIP in their everyday navigation
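    The survey centres on stereo vision, where depth is recovered from the disparity between two camera views. The short sketch below illustrates that idea with OpenCV's block matcher; the matcher choice, calibration values, and the 1.5 m obstacle threshold are illustrative assumptions, not details taken from any surveyed system.

```python
# Minimal sketch of stereo-vision depth estimation for obstacle detection.
# Matcher, calibration values, and thresholds are illustrative assumptions.
import cv2
import numpy as np

# Rectified grayscale frames from a calibrated stereo camera pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity is inversely proportional to depth.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # sub-pixel units

# depth = focal_length * baseline / disparity (placeholder camera parameters).
focal_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]

# Flag near obstacles in the central field of view, e.g. closer than 1.5 m.
center = depth_m[:, depth_m.shape[1] // 3 : 2 * depth_m.shape[1] // 3]
if np.any((center > 0) & (center < 1.5)):
    print("obstacle ahead")
```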

    What Are Some of the Best Ways for a Machine to Learn?

    Artificial intelligence has recently made great advances, driven by the emergence of new processing power and machine learning methods. That said, the learning capability of artificial intelligence is still in its infancy compared to that of humans and many animals. Many current artificial intelligence applications can only operate in very orchestrated, specific environments, with an extensive training set that exactly describes the conditions that will occur at execution time. With that in mind, and considering the many existing machine learning methods, the question arises: 'What are some of the best ways for a machine to learn?' Regarding how humans learn, Confucius' view is that learning happens by experience, imitation, and reflection. This paper explores and discusses these three ways of learning and their implementation in machines by looking at how they happen in minds

    An End-to-End Deep Reinforcement Learning-Based Intelligent Agent Capable of Autonomous Exploration in Unknown Environments

    In recent years, machine learning (and, as a result, artificial intelligence) has experienced considerable progress. Consequently, robots of different shapes and purposes have found their way into our everyday lives. These robots, developed with the goal of human companionship, are here to help us in our everyday routines. They differ from the previous generation of robots used in factories and static environments: these new robots are social robots that need to adapt to our environments by themselves and learn from their own experiences. In this paper, we contribute to the creation of robots with a high degree of autonomy, which is a must for social robots. We create an algorithm capable of autonomous exploration in, and adaptation to, unknown environments and implement it in a simulated robot. We then go beyond simulation and implement our algorithm on a real robot, where our sensor fusion method is able to overcome real-world noise and perform robust exploration
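    The abstract does not detail the algorithm, so the sketch below only illustrates the general pattern it describes: noisy sensor readings are fused into an observation, an end-to-end policy maps that observation to a continuous motion command, and an exploration bonus rewards visiting previously unseen parts of the map. The fusion step, policy placeholder, and reward term are assumptions for illustration, not the paper's method.

```python
# Skeleton of an exploration-driven RL loop: the agent is rewarded for
# entering map cells it has not visited before. Policy, sensor model, and
# motion model are placeholders, not the architecture from the paper.
import numpy as np

rng = np.random.default_rng(0)

def fuse(range_scans):
    """Toy sensor fusion: average several noisy range scans to suppress noise."""
    return np.mean(range_scans, axis=0)

def policy(observation):
    """Placeholder for the learned end-to-end policy (e.g. a neural network)."""
    return rng.uniform(-1.0, 1.0, size=2)   # continuous (v, w) command

visited = set()
position = np.zeros(2)

for step in range(1000):
    scans = [np.full(8, 2.0) + rng.normal(0.0, 0.1, 8) for _ in range(3)]
    obs = fuse(scans)
    action = policy(obs)
    position = position + 0.1 * action       # simplified motion model

    cell = tuple(np.floor(position).astype(int))
    reward = 1.0 if cell not in visited else 0.0   # exploration bonus
    visited.add(cell)
    # A real agent would store (obs, action, reward) and update the policy here.
```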

    A Multi-Objective Reinforcement Learning Based Controller for Autonomous Navigation in Challenging Environments

    In this paper, we introduce a self-trained controller for autonomous navigation in challenging static and dynamic environments (with moving walls and nets, and including trees, nets, windows, and pipes), using deep reinforcement learning trained simultaneously with multiple rewards. We train our RL algorithm in a multi-objective way. The algorithm learns to generate continuous actions for controlling the UAV, producing waypoints that lead it to a goal area (shown by an RGB image) while avoiding static and dynamic obstacles. We use the RGB-D image as the input to the algorithm, which learns to control the UAV in 3 DoF (x, y, and z). We train our robot in environments simulated with Gazebo and use the Robot Operating System (ROS) for communication between our algorithm and the simulated environments. Finally, we visualize the trajectories generated by our trained algorithms using several methods and present results that clearly show the algorithm's ability to learn to maximize the defined multi-objective reward
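    As an illustration of training with multiple rewards, the sketch below combines hypothetical goal-progress, safety, and smoothness terms for a 3-DoF waypoint controller into a reward vector and a weighted scalar. The terms, weights, and thresholds are assumptions; the paper defines its own multi-objective reward.

```python
# Hedged sketch of combining several reward terms for a UAV waypoint
# controller. All terms, weights, and thresholds are illustrative assumptions.
import numpy as np

def multi_objective_reward(pos, goal, nearest_obstacle_dist, prev_action, action,
                           weights=(1.0, 0.5, 0.1)):
    """Return the reward vector and a weighted scalarisation of it."""
    progress = -np.linalg.norm(goal - pos)                   # closer to goal is better
    safety = -1.0 if nearest_obstacle_dist < 0.5 else 0.0    # penalise near-collisions
    smoothness = -np.linalg.norm(action - prev_action)       # penalise jerky waypoints

    rewards = np.array([progress, safety, smoothness])
    return rewards, float(np.dot(weights, rewards))

# Example: one step of a hypothetical 3-DoF (x, y, z) waypoint update.
pos, goal = np.array([0.0, 0.0, 1.0]), np.array([5.0, 2.0, 1.5])
prev_a, a = np.zeros(3), np.array([0.4, 0.1, 0.05])
vec, scalar = multi_objective_reward(pos + a, goal, nearest_obstacle_dist=1.2,
                                     prev_action=prev_a, action=a)
print(vec, scalar)
```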
